17 research outputs found

    Developing an Efficient Secure Query Processing Algorithm on Encrypted Databases using Data Compression

    Cloud computing involves storing data with a third-party provider and being able to access it from any place at any time. With the advancement of cloud computing and databases, highly critical data are now stored in databases. However, because the information is kept in outsourced services such as Database as a Service (DaaS), security issues arise on both the server and the client side. In addition, query processing over a database shared by many clients, using time-consuming methods in a shared-resource environment, can make data processing and retrieval inefficient. Secure and efficient data retrieval across different clients requires an efficient query processing algorithm. This paper proposes an Efficient Secure Query Processing Algorithm (ESQPA) that processes queries efficiently by compressing the encrypted results before they are sent from the server to the clients. We address security issues by encrypting the data at the server side using CryptDB. Encryption techniques have recently been proposed to give clients confidentiality in cloud storage; they allow queries to be processed over encrypted data without decryption. To analyze the performance of ESQPA, it is compared with the current query processing algorithm in CryptDB. The results show that ESQPA reduces storage space by up to 63%.
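    The abstract describes compressing query results before they are transmitted from the server. A minimal sketch of that idea is shown below, assuming length-prefixed framing of the result rows and `zlib` as the compressor; the function names are illustrative, not from the paper, and a real ESQPA deployment would compress wherever redundancy remains in the CryptDB output.

```python
import struct
import zlib

def pack_rows(rows: list) -> bytes:
    """Frame each result row with a 4-byte big-endian length prefix,
    then compress the whole payload before sending it to the client."""
    framed = b"".join(struct.pack(">I", len(r)) + r for r in rows)
    return zlib.compress(framed, level=9)

def unpack_rows(blob: bytes) -> list:
    """Client side: decompress, then walk the length-prefixed frames."""
    framed = zlib.decompress(blob)
    rows, i = [], 0
    while i < len(framed):
        (n,) = struct.unpack_from(">I", framed, i)
        rows.append(framed[i + 4 : i + 4 + n])
        i += 4 + n
    return rows
```

Length prefixes are used instead of a separator byte so that arbitrary binary row contents round-trip safely.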

    Identifying Difficult exercises in an eTextbook Using Item Response Theory and Logged Data Analysis

    The growing dependence on eTextbooks and Massive Open Online Courses (MOOCs) has led to an increase in the amount of student learning data. By carefully analyzing this data, educators can identify difficult exercises and evaluate the quality of the exercises used to teach a particular topic. In this study, log data from one semester of OpenDSA eTextbook usage was analyzed to identify the most difficult exercises in a data structures course and to evaluate the quality of the course exercises. Our study is based on analyzing students' responses to the course exercises. We applied item response theory (IRT) analysis and a latent trait model (LTM) to identify the most difficult exercises, and we applied IRT to evaluate the quality of the course exercises. Our findings showed that the exercises related to algorithm analysis topics were the most difficult, and that six exercises could be classified as poor exercises that should be improved or need some attention. Comment: 6 pages, 5 figures
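    As a rough illustration of how item difficulty can be estimated from logged responses, the sketch below computes a Rasch-style difficulty (the logit of the failure rate) for each exercise. This is a crude stand-in for the full IRT/LTM fitting the study performs; the data layout and function name are assumptions for the example.

```python
import math

def item_difficulties(responses: list) -> dict:
    """responses: one dict per student mapping item_id -> 0/1 (wrong/correct).
    Returns a per-item difficulty estimate: log((1-p)/p), where p is the
    smoothed proportion of correct answers. Higher value = harder item."""
    totals, correct = {}, {}
    for student in responses:
        for item, score in student.items():
            totals[item] = totals.get(item, 0) + 1
            correct[item] = correct.get(item, 0) + score
    difficulties = {}
    for item in totals:
        # add-0.5 smoothing avoids log(0) for all-correct / all-wrong items
        p = (correct[item] + 0.5) / (totals[item] + 1.0)
        difficulties[item] = math.log((1 - p) / p)
    return difficulties
```

Exercises with difficulty far above zero (answered correctly by few students) would be the candidates flagged for review.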

    A Predictive Model for Student Performance in Classrooms using Student Interactions with an eTextbook

    With the rise of online eTextbooks and Massive Open Online Courses (MOOCs), a huge amount of data has been collected about students' learning. Careful analysis of this data gives educators useful insights into their students' performance and behavior when learning a particular topic. This paper proposes a new model for predicting student performance based on an analysis of how students interact with an interactive online eTextbook. By predicting students' performance early in the course, educators can identify at-risk students and provide suitable interventions. We considered two main tasks: predicting good versus poor performance (classification) and predicting the final exam grade (regression). To build the proposed model, we evaluated the most popular classification and regression algorithms: Random Forest Regression and Multiple Linear Regression for regression, and Logistic Regression, Decision Tree, Random Forest Classifier, K-Nearest Neighbors, and Support Vector Machine for classification. In our experiments, the best overall classifier was the Random Forest Classifier, with an accuracy of 91.7%, while the best regressor was Random Forest Regression, with an R² of 0.977.
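    The two evaluation metrics the abstract reports can be computed as follows. This is a generic sketch of the standard definitions of accuracy and R², not the paper's own evaluation code.

```python
def accuracy(y_true: list, y_pred: list) -> float:
    """Fraction of classification predictions that match the labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def r2_score(y_true: list, y_pred: list) -> float:
    """Coefficient of determination: 1 - SS_res / SS_tot.
    Equals 1.0 for perfect regression predictions, 0.0 when the model
    does no better than always predicting the mean."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot
```

An R² of 0.977, as reported for Random Forest Regression, means the model explains about 97.7% of the variance in final exam grades.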

    Integration of Computer Vision and Natural Language Processing in Multimedia Robotics Application

    Computer vision and natural language processing (NLP) are two active machine learning research areas, and their integration gives rise to a new interdisciplinary field that is attracting increasing attention from researchers. Research has been carried out on extracting the text associated with an image or a video to make computer vision more effective, and on using computer vision to ground the meaning of words for NLP. This combination is widely used in robotics: although robots can perceive their surroundings through many modes of interaction, natural gestures and spoken language are the most convenient ways for humans to interact with them, which is possible only if robots can understand such interactions. In the present paper, the proposed integrated application is used to guide vision-impaired people. Because vision is essential to human life, an alternative aid that guides blind users in their movements is highly important. For this purpose, a smartphone with vision, language, and intelligence capabilities is carried by the blind user to capture images of the surroundings. The smartphone communicates with a central server running a Faster Region-based Convolutional Neural Network (Faster R-CNN) that detects the objects in each image, so the system can inform the user about them and help avoid obstacles. The detection results are returned to the smartphone, which produces speech output to guide the user.
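    The last step of the pipeline, turning the server's detections into a spoken sentence, might look like the sketch below. The function name, sentence templates, and confidence threshold are illustrative assumptions; the paper does not specify this interface.

```python
def describe_detections(detections: list, min_confidence: float = 0.5) -> str:
    """detections: list of (label, confidence) pairs returned by the
    Faster R-CNN server. Returns one sentence for the phone's
    text-to-speech engine."""
    kept = [label for label, conf in detections if conf >= min_confidence]
    if not kept:
        return "The path ahead appears clear."
    if len(kept) == 1:
        return f"There is a {kept[0]} ahead."
    return "Ahead of you: " + ", ".join(kept[:-1]) + f", and {kept[-1]}."
```

Filtering by confidence keeps spurious low-score detections from cluttering the audio guidance.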

    Self-adaptive DNA-based Steganography Using Neural Networks

    Steganography is the science of concealing secret information within a digital cover object such as a text, image, or video file. Recently, deoxyribonucleic acid (DNA) sequences have been used as cover objects for data hiding. In this work, an effective algorithm called self-adaptive DNABS (DNA-based steganography) is proposed. The algorithm hides data without changing the function or the type of the protein encoded by the original DNA. It combines DNA-based steganography with a backpropagation neural network to achieve a lower cracking probability than other techniques. The performance of the algorithm is analyzed and tested by measuring four parameters: the embedding capacity, the data payload, the cracking probability, and the bits per nucleotide (bpn).
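    The simplest form of DNA-based encoding maps every 2 bits of the message to one of the four nucleotides, giving a bpn of exactly 2. The sketch below shows that baseline substitution scheme; note it is only an illustration of the encoding idea, not the paper's self-adaptive DNABS algorithm, which additionally preserves the function of the cover protein.

```python
# Fixed 2-bits-per-nucleotide substitution table (an assumed mapping).
BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def embed(message: bytes) -> str:
    """Encode a byte string as a DNA sequence, 2 bits per nucleotide."""
    bits = "".join(f"{byte:08b}" for byte in message)
    return "".join(BITS_TO_BASE[bits[i : i + 2]] for i in range(0, len(bits), 2))

def extract(dna: str) -> bytes:
    """Recover the byte string from the DNA sequence."""
    bits = "".join(BASE_TO_BITS[base] for base in dna)
    return bytes(int(bits[i : i + 8], 2) for i in range(0, len(bits), 8))
```

Each byte becomes exactly four nucleotides, so the embedding capacity grows linearly with the cover length; the paper's contribution lies in choosing substitutions that leave the encoded protein unchanged.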

    Identification of olive leaf disease through optimized deep learning approach

    Olive production in Saudi Arabia, which accounts for around 6% of worldwide output, is regarded as among the best in the world. Because olive trees are rain-fed and grown using conventional methods, yields vary greatly from year to year, a problem made worse by viral diseases and climate change. It is therefore necessary to identify plant diseases early. Farmers traditionally diagnose plant diseases by visual assessment or laboratory analysis. Deep learning (DL) has improved the diagnosis of diseases affecting olive leaves. To identify and categorize plant diseases, this research introduces an optimized Artificial Neural Network (ANN) that analyzes images of the plant's leaves. The data is first integrated and preprocessed, relevant features are extracted, and the Whale Optimization Algorithm (WOA) selects the necessary features. The data is then classified by the ANN, which uses a feed-forward neural network (FFNN) architecture. ANNs are highly adaptable and widely used to address a variety of problems. This study applies categorization to exclude possibilities at each stage, improving prediction accuracy. Compared to current models for plant disease detection, the proposed model showed a considerable improvement in precision, recall, accuracy, and F1-measure.
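    For readers unfamiliar with the Whale Optimization Algorithm, the sketch below shows a stripped-down version of its two main moves (encircling the current best solution and the spiral bubble-net update) on a continuous toy objective. It is a simplified illustration, not the paper's implementation: the search-for-prey phase is omitted, and feature selection would use a binary mask and a classifier-accuracy fitness rather than this continuous sphere function.

```python
import math
import random

def woa_minimize(fitness, dim: int, iters: int = 200, pop: int = 20,
                 seed: int = 1, bound: float = 5.0) -> list:
    """Minimal Whale Optimization Algorithm sketch: each whale either
    encircles the best solution found so far or spirals toward it."""
    rng = random.Random(seed)
    whales = [[rng.uniform(-bound, bound) for _ in range(dim)]
              for _ in range(pop)]
    best = min(whales, key=fitness)[:]
    for t in range(iters):
        a = 2 - 2 * t / iters                    # a decreases linearly 2 -> 0
        for w in whales:
            if rng.random() < 0.5:               # encircling-prey update
                A = a * (2 * rng.random() - 1)
                C = 2 * rng.random()
                for j in range(dim):
                    d = abs(C * best[j] - w[j])
                    w[j] = best[j] - A * d
            else:                                # spiral bubble-net update
                l = rng.uniform(-1, 1)
                for j in range(dim):
                    d = abs(best[j] - w[j])
                    w[j] = d * math.exp(l) * math.cos(2 * math.pi * l) + best[j]
            if fitness(w) < fitness(best):       # keep the best solution seen
                best = w[:]
    return best
```

In the feature-selection setting of the paper, each position vector would be thresholded into a 0/1 mask over features and the fitness would reward high classification accuracy with few selected features.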